Dagger Instance #147
Conversation
braid(V::DagDom, W::DagDom) =
  MatrixThunk(delayed(x->x)(LinearMap(braid_lm(V.N), braid_lm(W.N), W.N+V.N, V.N+W.N)), V.N+W.N, W.N+V.N)

mcopy(V::DagDom) =
  MatrixThunk(delayed(x->x)(LinearMap(mcopy_lm, plus_lm, 2*V.N, V.N)), V.N, 2*V.N)
I think these should be MatrixThunk(delayed(x->LinearMap(mcopy_lm, plus_lm, 2*V.N, V.N)*x), V.N, 2*V.N), etc.
Yeah, I see what you're getting at. I had a hard time with this one (as well as braid and delete). I think that version will cause a TypeError, since delayed(x->LinearMap(mcopy_lm, plus_lm, 2*V.N, V.N)*x) is not of type Thunk.

From the way it's used in the LinearMaps instance, mcopy returns a morphism (a LinearMap object) which is then applied to other morphisms through composition. If we keep that usage (applying mcopy by composition), I think we want this to return a MatrixThunk holding an identity Thunk that wraps the LinearMap for mcopy, which can then be passed as an argument to a compose call. This copies the internal LinearMap through composition.
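To make the distinction concrete, here is a minimal sketch, assuming Dagger's delayed and the PR's MatrixThunk(thunk, dom, codom) wrapper; mcopy_fn and plus_fn below are toy stand-ins for mcopy_lm and plus_lm, not the PR's actual definitions.

using Dagger: delayed
using LinearMaps: LinearMap

N = 3
mcopy_fn(x) = vcat(x, x)              # copy: R^N -> R^2N
plus_fn(y)  = y[1:N] .+ y[N+1:2N]     # adjoint action: R^2N -> R^N
lm = LinearMap(mcopy_fn, plus_fn, 2N, N)

t_bad  = delayed(x -> lm * x)         # a delayed callable, not a Thunk
t_good = delayed(x -> x)(lm)          # an identity Thunk wrapping the LinearMap itself

# t_good is the kind of value MatrixThunk expects, e.g. MatrixThunk(t_good, N, 2N)
# in the PR's (thunk, dom, codom) convention; composition then operates on the
# wrapped LinearMap, and the DAG is only executed when the final Thunk is collected.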
@bosonbaas, I think this is a good start. I noted some stuff that can be simplified. And we need some tests before this is ready to merge.
Thanks for the comments! It might be useful to wait on pulling this request until we transfer …
Very cool. Dagger is new to me, although I once used Dask, which looks similar. Thanks for getting this started, Andrew. I agree that resolving #144 first makes sense.
Currently, the only tests I have compare the results of expressions evaluated as …
Here are the notes from our conversation today. I think we might want to have both the OpenDAG approach you implemented here and a "CT-native" approach like the following Thunk implementation.

using Dagger: Thunk

abstract type AbstractThunk end

struct ThunkDom
    types::Vector{DataType}
end

struct GeneratorThunk <: AbstractThunk
    dom
    codom
    t::Thunk
end

struct CompositeThunk <: AbstractThunk
    fs::Vector{AbstractThunk}
end

struct ProductThunk <: AbstractThunk
    fs::Vector{AbstractThunk}
end

struct CopyThunk <: AbstractThunk
    x::ThunkDom
end

struct BraidThunk <: AbstractThunk
    x::ThunkDom
    y::ThunkDom
end

# copy(X::Ob)         → (x) ↦ CopyThunk(x)
# braid(X::Ob, Y::Ob) → (x, y) ↦ BraidThunk(x, y)
# f⊗g⋅h⋅Δ(x)          ↦ CompositeThunk([ProductThunk([f, g]), h, CopyThunk(x)])
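As a rough sketch of how such a tree might be lowered to an actual Dagger DAG (this lowering is my assumption, not part of the notes above; the fold order and the idea that a GeneratorThunk wraps something matrix-like are both guesses):

using Dagger: delayed

# Hypothetical lowering: turn the lazy thunk tree into Dagger Thunks applied to
# an input x. GeneratorThunk.t is assumed to wrap a matrix-like object.
lower(f::GeneratorThunk, x) = delayed((A, v) -> A * v)(f.t, x)
lower(f::CompositeThunk, x) = foldl((v, g) -> lower(g, v), f.fs; init = x)
lower(f::CopyThunk, x)      = delayed(v -> vcat(v, v))(x)
# ProductThunk would also need the dimensions of its factors in order to split x.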
I haven't looked at this carefully yet, but is the comment I made in #157 relevant here as well?
Yes, it is.
@jpfairbanks and @bosonbaas, what are your plans for this experiment? Looks like a solid amount of work has gone into it. One possibility is to just merge it into the experiments basically as is. For that, we would just need to make sure that all the tests still pass and update the experiments GitHub CI action to run the tests (otherwise, bit rot is inevitable).
I think we would want to update it to use something like …
I've made an …
I think it makes sense to pull the workflow parts of AlgebraicRelations out into a new package and put this in that package as a DAG construction tool. AlgebraicRelations could still have the code that generates a schema from a Petri net presentation of an SMC, but the creation and execution of Workflows would go in the new package. Then when you use both Workflows and Relations, you would get checkpointing into SQL for free.
Closing this PR, as it will be better suited to a new LinAlg package if we ever pick it up again.
This PR starts work on parallel computing with Catlab by making an instance of GLA for Dagger DAGs. This allows us to execute linear maps in parallel using the Dagger scheduler.
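For readers new to Dagger, here is a generic illustration (not the PR's actual API) of the execution model the instance targets: independent linear-map applications become Thunks that the Dagger scheduler can run in parallel, and the DAG is only forced by collect.

using Dagger: delayed
using LinearMaps: LinearMap

A = LinearMap(rand(4, 4))
B = LinearMap(rand(4, 4))
x = rand(4)

ax = delayed(*)(A, x)      # A*x as a Thunk
bx = delayed(*)(B, x)      # B*x as an independent Thunk
y  = delayed(+)(ax, bx)    # depends on both; the two branches can run in parallel

collect(y)                 # executes the DAG on the Dagger scheduler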